
    Computational Evidence for Laboratory Diagnostic Pathways: Extracting Predictive Analytes for Myocardial Ischemia from Routine Hospital Data

    Background: Laboratory parameters are critical parts of many diagnostic pathways, mortality scores, patient follow-ups, and overall patient care, and should therefore rest on standardized, evidence-based recommendations. Currently, laboratory parameters and their significance are treated differently depending on expert opinion, clinical environment, and varying hospital guidelines. In our study, we aimed to demonstrate the capability of a set of algorithms to identify predictive analytes for a specific diagnosis. As an illustration of our proposed methodology, we examined the analytes associated with myocardial ischemia; it is a well-researched diagnosis and therefore provides a substrate for comparison. We intend to present a toolset that will boost the evolution of evidence-based laboratory diagnostics and, therefore, improve patient care. Methods: The data consisted of preexisting, anonymized recordings from the emergency ward covering all patient cases with a measured value for troponin T. We used multiple imputation, orthogonal data augmentation, and Bayesian model averaging to create predictive models for myocardial ischemia. Each model incorporated different analytes as cofactors. By examining these models further, we could infer the predictive importance of each analyte in question. Results: The algorithms extracted troponin T as a highly predictive analyte for myocardial ischemia. As this is a known relationship, we regard the predictive importance of troponin T as a proof of concept, suggesting a functioning method. Additionally, we could demonstrate the algorithm's capability to extract known risk factors of myocardial ischemia from the data. Conclusion: In this pilot study, we chose an assembly of algorithms to analyze the value of analytes in predicting myocardial ischemia. By providing reliable correlations between the analytes and the diagnosis of myocardial ischemia, we demonstrated the possibility of creating unbiased, computation-based guidelines for laboratory diagnostics using the computational power available in today's era of digitalization.
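    To make the modelling step concrete, the following is a minimal sketch of how analyte importance could be ranked after imputation, approximating Bayesian model averaging with BIC-weighted logistic regressions rather than the orthogonal data augmentation used in the study. The input file, column names, and analyte list are hypothetical placeholders, not the study's data.

```python
# Sketch: rank laboratory analytes by posterior inclusion probability for a
# binary diagnosis label, using imputation and a BIC-weighted approximation
# of Bayesian model averaging. File and column names are hypothetical.
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

ANALYTES = ["troponin_t", "creatinine", "crp", "glucose", "hemoglobin"]

df = pd.read_csv("emergency_ward_labs.csv")        # hypothetical extract
y = df["myocardial_ischemia"].to_numpy()           # 0/1 diagnosis label

# Imputation of missing analyte values (a full analysis would pool several
# imputed datasets; a single iterative imputation is used here for brevity).
X = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df[ANALYTES]),
                 columns=ANALYTES)

# Fit a logistic regression for every non-empty analyte subset and weight each
# model by exp(-BIC/2), a standard approximation of its posterior weight.
bics, members = [], []
for k in range(1, len(ANALYTES) + 1):
    for subset in combinations(ANALYTES, k):
        design = sm.add_constant(X[list(subset)])
        bics.append(sm.Logit(y, design).fit(disp=0).bic)
        members.append(set(subset))

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Posterior inclusion probability: total weight of all models containing the
# analyte. High values flag analytes that are predictive of the outcome.
pip = {a: w[[a in m for m in members]].sum() for a in ANALYTES}
for analyte, p in sorted(pip.items(), key=lambda kv: -kv[1]):
    print(f"{analyte:12s}  PIP = {p:.3f}")
```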

    Big Data in Laboratory Medicine—FAIR Quality for AI?

    Laboratory medicine is a digital science. Every large hospital produces a wealth of data each day—from simple numerical results of, e.g., sodium measurements to the highly complex output of "-omics" analyses, as well as quality control results and metadata. Processing, connecting, storing, and ordering extensive parts of these individual data requires Big Data techniques. Whereas novel technologies such as artificial intelligence and machine learning have exciting applications for the augmentation of laboratory medicine, the Big Data concept remains fundamental for any sophisticated data analysis in large databases. To make laboratory medicine data optimally usable for clinical and research purposes, they need to be FAIR: findable, accessible, interoperable, and reusable. This can be achieved, for example, by automated recording, connection of devices, efficient ETL (Extract, Transform, Load) processes, careful data governance, and modern data security solutions. Enriched with clinical data, laboratory medicine data allow a gain in pathophysiological insights, can improve patient care, and can be used to develop reference intervals for diagnostic purposes. Nevertheless, Big Data in laboratory medicine do not come without challenges: managing the growing number of analyses and the data derived from them is a demanding task. Laboratory medicine experts are and will be needed to drive this development, take an active role in the ongoing digitalization, and provide guidance for their clinical colleagues engaging with laboratory data in research.
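    As an illustration of the ETL step mentioned above, the following is a minimal sketch of extracting raw laboratory result rows, transforming them into a harmonized schema, and loading them into a relational store. The file names, column names, and code mapping are hypothetical and not taken from the publication.

```python
# Sketch of a minimal ETL (Extract, Transform, Load) pass for laboratory
# results. File names, column names, and the code mapping are hypothetical.
import sqlite3

import pandas as pd

# Extract: raw export from a laboratory information system.
raw = pd.read_csv("lis_export.csv")

# Transform: harmonize timestamps and map local test codes to a shared
# vocabulary (a toy mapping here; a real pipeline would use LOINC or similar).
code_map = {"NA": "sodium", "K": "potassium", "CREA": "creatinine"}
raw["analyte"] = raw["local_test_code"].map(code_map)
raw["timestamp"] = pd.to_datetime(raw["result_time"])
clean = raw.dropna(subset=["analyte", "value"])[
    ["patient_id", "analyte", "value", "unit", "timestamp"]
]

# Load: write into a queryable store that downstream analyses can reuse.
with sqlite3.connect("lab_warehouse.db") as con:
    clean.to_sql("lab_results", con, if_exists="append", index=False)
```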

    Real-world Health Data and Precision for the Diagnosis of Acute Kidney Injury, Acute-on-Chronic Kidney Disease, and Chronic Kidney Disease: Observational Study.

    BACKGROUND The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient's current, historical, and baseline data. The diagnosis of acute kidney injury, chronic kidney disease, and acute-on-chronic kidney disease requires previous measurements of creatinine, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnoses may be hindered by unclear definitions of the individual creatinine baseline and by rough ranges of normal values that are set without adjusting for age, ethnicity, comorbidities, and treatment. Correct diagnosis classification and sufficient staging improve coding, data quality, reimbursement, the choice of therapeutic approach, and a patient's outcome. OBJECTIVE In this study, we aim to apply a data-driven approach to assign diagnoses of acute, chronic, and acute-on-chronic kidney diseases with the help of a complex rule engine. METHODS Real-time and retrospective data from the hospital's clinical data warehouse of inpatient and outpatient cases treated between 2014 and 2019 were used. Delta serum creatinine, baseline values, and admission and discharge data were analyzed. A KDIGO-based SQL algorithm applied specific diagnosis-based International Classification of Diseases (ICD) codes to inpatient stays. Text mining on discharge documentation was also conducted to measure the effects on diagnosis. RESULTS We show that this approach yielded an increased number of diagnoses (4491 ICD-coded cases of kidney disease and injury in 2014 vs 11,124 in 2019) and higher precision in documentation and coding. The percentage of unspecific ICD N19 codes among all generated kidney disease codes dropped from 19.71% (1544/7833) in 2016 to 4.38% (416/9501) in 2019, whereas the percentage of specific ICD N18 codes increased from 50.1% (3924/7833) in 2016 to 62.04% (5894/9501) in 2019. CONCLUSIONS Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patient outcomes will be the next step in this project.
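    The rule engine itself is implemented in SQL in the study; as a language-agnostic illustration, the sketch below applies the two serum creatinine criteria of the KDIGO AKI definition (an absolute rise of at least 0.3 mg/dL within 48 hours, or a rise to at least 1.5 times an earlier value within 7 days) to one patient's measurements. The data structure and the baseline handling are simplified assumptions, not the study's algorithm.

```python
# Sketch: flag acute kidney injury from serum creatinine measurements using
# the two KDIGO creatinine criteria. Simplified illustration only; urine
# output criteria and baseline back-calculation are omitted.
from datetime import datetime, timedelta

# (timestamp, creatinine in mg/dL) for one hypothetical patient
measurements = [
    (datetime(2019, 3, 1, 8), 0.9),
    (datetime(2019, 3, 3, 9), 1.1),
    (datetime(2019, 3, 4, 7), 1.5),
]

def has_aki(series):
    series = sorted(series)
    for i, (t_i, c_i) in enumerate(series):
        for t_j, c_j in series[i + 1:]:
            # Criterion 1: increase >= 0.3 mg/dL within 48 hours.
            if t_j - t_i <= timedelta(hours=48) and c_j - c_i >= 0.3:
                return True
            # Criterion 2: increase to >= 1.5x the earlier value within
            # 7 days, using that value as a stand-in for the baseline.
            if t_j - t_i <= timedelta(days=7) and c_j >= 1.5 * c_i:
                return True
    return False

print("AKI criteria met:", has_aki(measurements))
```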

    The BioRef Infrastructure, a Framework for Real-Time, Federated, Privacy-Preserving, and Personalized Reference Intervals: Design, Development, and Application.

    BACKGROUND Reference intervals (RIs) for patient test results are in standard use across many medical disciplines, allowing physicians to identify measurements indicating potentially pathological states with relative ease. The process of inferring cohort-specific RIs is, however, often ignored because of the high costs and cumbersome efforts associated with it. Sophisticated analysis tools are required to automatically infer relevant and locally specific RIs directly from routine laboratory data. These tools would effectively connect clinical laboratory databases to physicians and provide personalized target ranges for the respective cohort population. OBJECTIVE This study aims to describe the BioRef infrastructure, a multicentric governance and IT framework for the estimation and assessment of patient group-specific RIs from routine clinical laboratory data using an innovative decentralized data-sharing approach and a sophisticated, clinically oriented graphical user interface for data analysis. METHODS A common governance agreement and interoperability standards have been established, allowing the harmonization of multidimensional laboratory measurements from multiple clinical databases into a unified "big data" resource. International coding systems, such as the International Classification of Diseases, Tenth Revision (ICD-10); unique identifiers for medical devices from the Global Unique Device Identification Database; type identifiers from the Global Medical Device Nomenclature; and a universal transfer logic, such as the Resource Description Framework (RDF), are used to align the routine laboratory data of each data provider for use within the BioRef framework. With a decentralized data-sharing approach, the BioRef data can be evaluated by end users from each cohort site following a strict "no copy, no move" principle, that is, only data aggregates for the intercohort analysis of target ranges are exchanged. RESULTS The TI4Health distributed and secure analytics system was used to implement the proposed federated and privacy-preserving approach and comply with the limitations applied to sensitive patient data. Under the BioRef interoperability consensus, clinical partners enable RIs to be computed and queried via the TI4Health graphical user interface without exposing the underlying raw data. The interface was developed for use by physicians and clinical laboratory specialists and allows intuitive and interactive data stratification by patient factors (age, sex, and personal medical history) as well as laboratory analysis determinants (device, analyzer, and test kit identifier). This consolidated effort enables the creation of extremely detailed and patient group-specific queries, allowing the generation of individualized, covariate-adjusted RIs on the fly. CONCLUSIONS With the BioRef-TI4Health infrastructure, we have implemented a framework that lets clinical physicians and researchers define precise RIs immediately in a convenient, privacy-preserving, and reproducible manner, promoting a vital part of practicing precision medicine while streamlining compliance and avoiding transfers of raw patient data. This new approach can provide a crucial update on RIs and improve patient care for personalized medicine.
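    The "no copy, no move" principle means that only aggregates leave each site. As a conceptual sketch, under the simplifying assumption that sites share per-stratum histograms over a common binning (this is not the TI4Health protocol itself), a central service could combine the aggregates and read off a 2.5th-97.5th percentile reference interval as follows.

```python
# Sketch: combine per-site histograms of a laboratory analyte (only aggregate
# counts are exchanged, never patient-level values) and derive a reference
# interval. Conceptual illustration only, with simulated stand-in data.
import numpy as np

bin_edges = np.linspace(120, 160, 81)  # common binning agreed by all sites (mmol/L)

def local_aggregate(values):
    """What a site would share: bin counts only, never raw patient values."""
    counts, _ = np.histogram(values, bins=bin_edges)
    return counts

# Stand-in local data at two sites (in reality these never leave the site).
rng = np.random.default_rng(0)
site_a = local_aggregate(rng.normal(140, 3.0, 5000))
site_b = local_aggregate(rng.normal(139, 3.5, 8000))

# Central service combines the aggregates and reads off the interval.
total = site_a + site_b
cdf = np.cumsum(total) / total.sum()
lower = bin_edges[np.searchsorted(cdf, 0.025) + 1]
upper = bin_edges[np.searchsorted(cdf, 0.975) + 1]
print(f"Pooled reference interval: {lower:.1f} - {upper:.1f} mmol/L")
```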

    Potential of Dried Blood Self-Sampling for Cyclosporine C2 Monitoring in Transplant Outpatients

    Background. Close therapeutic drug monitoring of cyclosporine (CsA) in transplant outpatients is a favourable procedure to maintain long-term blood drug levels within their respective narrow therapeutic ranges. Compared with basal levels (C0), CsA peak levels (C2) are more predictive of transplant rejection. However, the use of C2 levels is hampered by the need for precisely timed blood sampling and qualified personnel. We therefore evaluated a new self-obtained C2 blood sampling procedure in transplant outpatients using dried capillary and venous blood samples and compared the CsA levels, stability, and clinical practicability of the different procedures. Methods. 55 solid organ transplant recipients were instructed to collect, single-handedly, 50 μL capillary blood samples and dried blood spots by finger prick using standard finger-prick devices. We used standardized EDTA-coated capillary blood collection systems and standardized filter paper WS 903. CsA was determined by LC-MS/MS. The patients and technicians also answered a questionnaire on the procedure and sample quality. Results. The C0 and C2 levels from the capillary blood collection systems (C0 [ng/mL]: 114.5 ± 44.5; C2: 578.2 ± 222.2) and the capillary dried blood spots (C0 [ng/mL]: 175.4 ± 137.7; C2: 743.1 ± 368.1) correlated significantly (P < .01) with the drug levels of the venous blood samples (C0 [ng/mL]: 97.8 ± 37.4; C2: 511.2 ± 201.5). The correlation at C0 was ρcap.-ven. = 0.749 and ρdried blood-ven. = 0.432; at C2, ρcap.-ven. = 0.861 and ρdried blood-ven. = 0.711. The patients preferred dried blood sampling because of the simpler and less painful procedure. Additionally, the sample quality of self-obtained dried blood spots for LC-MS/MS analytics was superior to that of the respective capillary blood samples. Conclusions. Self-obtained C2 dried blood sampling can easily be performed by transplant outpatients and is therefore suitable and cost-effective for close therapeutic drug monitoring.
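    The reported ρ values compare self-obtained samples against the venous reference. As a minimal sketch of that comparison, with hypothetical measurement arrays standing in for the study data, the Spearman rank correlation could be computed as follows.

```python
# Sketch: Spearman rank correlation between self-obtained C2 levels and the
# venous reference. The arrays are hypothetical stand-ins, not study data.
import numpy as np
from scipy.stats import spearmanr

venous_c2 = np.array([480, 520, 610, 555, 430, 700])       # ng/mL, hypothetical
dried_blood_c2 = np.array([510, 580, 690, 640, 470, 790])  # ng/mL, hypothetical

rho, p_value = spearmanr(venous_c2, dried_blood_c2)
print(f"rho = {rho:.3f}, P = {p_value:.3f}")
```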

    Collaborative Challenges of Multi-Cohort Projects in Pharmacogenetics-Why Time Is Essential for Meaningful Collaborations

    Multi-cohort projects in medicine provide an opportunity to investigate scientific questions beyond the boundaries of a single institution and endeavor to increase the sample size for obtaining more reliable results. However, the complications of these kinds of collaborations arise during management, with many administrative hurdles. Hands-on approaches and lessons learned from previous collaborations provide solutions for optimized collaboration models. Here, we use our experience in running PGX-link, a Swiss multi-cohort project, to show the strategy we used to tackle different challenges from project setup to obtaining the relevant permits, including ethics approval. We set PGX-link in an international context because our struggles were similar to those encountered during the SYNCHROS (SYNergies for Cohorts in Health: integrating the ROle of all Stakeholders) project. We provide ad hoc solutions for cohorts, general project management strategies, and suggestions for unified protocols between cohorts that would ease current management hurdles. Project managers are not necessarily familiar with medical projects, and even if they are, they are not aware of the intricacies behind decision-making and, consequently, of the time needed to set up multi-cohort collaborations. This paper is meant to be a brief overview of what we experienced with our multi-cohort project and provides the necessary practices for future managers.

    Electrolyte disorders and in-hospital mortality during prolonged heat periods: a cross-sectional analysis

    BACKGROUND Heat periods during recent years were associated with excess hospitalization and mortality rates, especially in the elderly. We intended to study whether prolonged warmth/heat periods are associated with an increased prevalence of disorders of serum sodium and potassium and with increased hospital mortality. METHODS In this cross-sectional analysis, all patients admitted to the Department of Emergency Medicine of a large tertiary care facility between January 2009 and December 2010 with measurements of serum sodium were included. Demographic data along with detailed data on diuretic medication, length of hospital stay, and hospital mortality were obtained for all patients. Data on daily temperatures (maximum, mean, minimum) and humidity were retrieved from Meteo Swiss. RESULTS A total of 22,239 patients were included in the study. We observed 5 periods with temperatures exceeding 25 °C for 3 to 5 days and 2 periods with temperatures exceeding 25 °C for more than 5 days. Additionally, 2 periods of 3 to 5 days with daily temperatures exceeding 30 °C occurred during the study period. We found a significantly increased prevalence of hyponatremia during heat periods. However, in the Cox regression analysis, prolonged heat was not associated with the prevalence of disorders of serum sodium or potassium. Admission during a heat period was an independent predictor of hospital mortality. CONCLUSIONS Although we found an increased prevalence of hyponatremia during heat periods, no convincing association could be found for hypernatremia or disorders of serum potassium.
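    The mortality analysis uses Cox regression with admission during a heat period as a covariate. A minimal sketch of such a model, assuming a hypothetical patient table with length of stay, an in-hospital death indicator, age, and a heat-period admission flag (these names are placeholders, not the study's dataset), could look as follows.

```python
# Sketch: Cox proportional hazards model for in-hospital mortality with
# admission during a heat period as a covariate. The dataframe is a
# hypothetical stand-in for the study's admission data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "los_days": [3, 10, 7, 2, 15, 6],        # length of hospital stay
    "died_in_hospital": [0, 1, 0, 0, 1, 0],  # event indicator
    "heat_period_admission": [1, 1, 0, 0, 1, 0],
    "age": [81, 74, 65, 59, 88, 70],
})

# Small ridge penalty only to stabilize the fit on this tiny toy sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="los_days", event_col="died_in_hospital")
cph.print_summary()  # hazard ratios for heat-period admission and age
```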